This work presents an actuation framework for a bioinspired flapping drone called Aerobat. This drone, capable of producing dynamically versatile wing conformations, possesses 14 body joints and is tail-less. Therefore, unlike mainstream flapping-wing designs that are open-loop stable and have no pronounced morphing characteristics, the actuation and closed-loop feedback design of our robot pose significant challenges. We propose a framework based on integrating mechanical intelligence and control. In this design framework, small adjustments led by several tiny, low-power actuators called primers can play significant flight control roles owing to the robot's computational structures. Since the primers are extremely lightweight, the system can host them in large numbers. In this work, we aim to show the feasibility of joint motion regulation in Aerobat's untethered flights.
Flying animals, such as bats, fly through their fluidic environment as they create air jets and form wake structures downstream of their flight path. Bats, in particular, dynamically morph their highly flexible and dexterous armwing to manipulate their fluidic environment, which is key to their agility and flight efficiency. This paper presents the theoretical and numerical analysis of wake-structure-based gait design inspired by bat flight for flapping robots, using reduced-order models and an unsteady aerodynamic model incorporating the Wagner function. The objective of this paper is to introduce the notion of gait design for flapping robots by systematically searching the design space in the context of optimization. The solution found using our gait design framework was used to design and test a flapping robot.
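As a concrete reading of "systematically searching the design space in the context of optimization", such a gait-design problem can be posed generically as a parametric trajectory optimization. The cost J, the gait parameterization theta, and the periodicity constraint below are placeholders, not the paper's exact formulation:

```latex
\begin{aligned}
\min_{\theta \in \Theta} \quad & J(\theta) \;=\; \int_{0}^{T} c\big(x(t),\, u_{\theta}(t)\big)\, \mathrm{d}t
   && \text{(e.g., aerodynamic cost or negative thrust)} \\
\text{s.t.} \quad & \dot{x}(t) = f\big(x(t),\, u_{\theta}(t)\big)
   && \text{(reduced-order wake and unsteady aerodynamics)} \\
& x(0) = x(T), \quad u_{\theta}(t) = u_{\theta}(t + T)
   && \text{(periodic flapping gait parameterized by } \theta\text{)}
\end{aligned}
```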
Flying animals possess highly complex physical characteristics and can perform agile maneuvers using their wings. Flapping wings produce complex wake structures that influence the aerodynamic forces and can be difficult to model. While these forces can be modeled using fluid-structure interaction, such simulations are computationally very expensive and difficult to formulate. In this paper, we follow a simpler approach and derive the aerodynamic forces using a relatively small number of states, presenting them in a simple state-space form. The formulation uses Prandtl's lifting line theory and Wagner's function to determine the unsteady aerodynamic forces acting on the wing, which are then compared with experimental data from a bat-style robot called Aerobat. The simulated trailing-edge vortex shedding can be evaluated from this model, which can then be analyzed for a wake-based gait design approach to improve the aerodynamic performance of the robot.
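For reference, a standard way to obtain the "relatively small number of states" mentioned above is R. T. Jones' exponential approximation of the Wagner indicial function. The coefficients and the resulting two-state circulatory-lift model below are the textbook construction, not necessarily the exact model identified for Aerobat:

```latex
% Jones' approximation of the Wagner function (s = Ut/b, semichords traveled)
\Phi(s) \;\approx\; 1 - \psi_1 e^{-\varepsilon_1 s} - \psi_2 e^{-\varepsilon_2 s},
\qquad \psi_1 = 0.165,\; \psi_2 = 0.335,\; \varepsilon_1 = 0.0455,\; \varepsilon_2 = 0.3.

% Each exponential contributes one aerodynamic state, giving a state-space model
\dot z_i = -\varepsilon_i \tfrac{U}{b}\, z_i + w_{3/4}(t), \qquad i = 1, 2,

% with circulatory lift per unit span obtained from the effective downwash
L_c' = \pi \rho\, U c \Big[ (1 - \psi_1 - \psi_2)\, w_{3/4}(t)
       + \psi_1 \varepsilon_1 \tfrac{U}{b}\, z_1 + \psi_2 \varepsilon_2 \tfrac{U}{b}\, z_2 \Big].
```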
Animals such as birds make extensive use of multi-modal locomotion, combining legged and aerial mobility with dominant inertial effects. Robotic biomimicry of such multi-modal locomotion feats can yield highly versatile systems in terms of their ability to negotiate their task spaces. The main objective of this paper is to discuss the challenges of achieving multi-modal locomotion and to report our progress in developing a quadrupedal robot capable of multi-modal locomotion (legged and aerial locomotion). We report the mechanical and electrical components used in the robot, in addition to the simulations and experiments conducted toward the goal of developing a versatile multi-modal robotic platform.
The feasibility of using rovers, drones, CubeSats, small satellites, and similar platforms for aerial and ground manipulation, sensing, and reconnaissance has been assessed before. Among all these solutions, balloon-based systems possess advantages that make them highly attractive, such as a simple operation mechanism and long operation times. However, in balloon-based applications there are many obstacles to overcome in order to achieve robust loitering performance. We attempt to identify the design and control challenges and propose a novel robotic platform that allows the application of balloons to the reconnaissance and sensing of Mars craters. This work briefly covers our proposed actuation and model predictive control design framework for steering such balloon systems. We propose the coordinated servoing of multiple unmanned ground vehicles (UGVs) to regulate the tension in the cables that drive the balloon and connect it to an unactuated suspended payload.
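A generic discrete-time model predictive control formulation of the kind alluded to above, with the cable tensions acting as inputs; the horizon N, cost weights, and constraint sets are illustrative placeholders rather than the paper's specific design:

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \Big( \|x_k - x^{\mathrm{ref}}_k\|_Q^2 + \|u_k\|_R^2 \Big) + \|x_N - x^{\mathrm{ref}}_N\|_P^2 \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad x_0 = x(t), \\
& u_k \in \mathcal{U} \ \ \text{(cable tensions bounded, } u_k \ge 0\text{)}, \qquad x_k \in \mathcal{X}.
\end{aligned}
```

Only the first input of the optimized sequence is applied at each step, and the problem is re-solved as the balloon and payload states are updated.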
We seek control design paradigms for legged systems that can bypass the expensive algorithms relying on the heavy onboard computers widely used in these systems, yet match what those methods can do by using cheaper, optimization-free frameworks. In this work, we present our preliminary results on the modeling and control design of the quadrupedal robot Husky Carbon, developed at Northeastern University (NU) in Boston. In our approach, we employ a supervisory controller and an Explicit Reference Governor (ERG) to enforce ground reaction force constraints. These constraints are usually enforced using expensive optimization. In this work, however, the ERG manipulates the state references applied to the supervisory controller to enforce the ground contact constraints through an update law based on Lyapunov stability arguments. As a result, the approach is computationally much faster than the widely used optimization-based methods.
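For context, the standard Explicit Reference Governor construction (Garone and Nicotra) modulates an applied reference v toward the desired reference r at a rate scaled by a Lyapunov-based safety margin. This is the generic form, not necessarily the exact update law used for Husky Carbon:

```latex
\dot{v} \;=\; \kappa\, \Delta(x, v)\, \hat{\rho}(r, v),
\qquad
\Delta(x, v) \;=\; \Gamma(v) - V(x, v) \;\ge\; 0,
\qquad
\hat{\rho}(r, v) \;=\; \frac{r - v}{\max\{\|r - v\|,\, \eta\}},
```

where V(x, v) is a Lyapunov function of the prestabilized closed loop, Γ(v) is the largest level set of V that satisfies the constraints (here, the ground-reaction-force constraints), and κ, η > 0 are tuning constants. Because the safety margin Δ vanishes as the state approaches the constraint boundary, the applied reference stops moving before the constraints can be violated, without solving an optimization problem online.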
The main objective of Prognostics and Health Management is to estimate the Remaining Useful Lifetime (RUL), namely, the time a system or a piece of equipment will remain in working order before it starts to function incorrectly. In recent years, numerous machine learning algorithms have been proposed for RUL estimation, mainly focusing on providing more accurate RUL predictions. However, there are many sources of uncertainty in the problem, such as the inherent randomness of system failures, the lack of knowledge regarding their future states, and the inaccuracy of the underlying predictive models, making it infeasible to predict the RULs precisely. Hence, it is of utmost importance to quantify the uncertainty alongside the RUL predictions. In this work, we investigate the conformal prediction (CP) framework, which represents uncertainty by predicting sets of possible values for the target variable (intervals in the case of RUL) instead of making point predictions. Under very mild technical assumptions, CP formally guarantees that the actual value (true RUL) is covered by the predicted set with a degree of certainty that can be prespecified. We study three CP algorithms to conformalize any single-point RUL predictor and turn it into a valid interval predictor. Finally, we conformalize two single-point RUL predictors, deep convolutional neural networks and gradient boosting, and illustrate their performance on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data sets.
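A minimal sketch of the split (inductive) conformal procedure described above, wrapping an arbitrary point RUL predictor. The choice of predictor, the miscoverage level alpha, and the absolute-residual score are illustrative, not necessarily the exact variants studied in the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def split_conformal_intervals(X_train, y_train, X_calib, y_calib, X_test, alpha=0.1):
    """Turn a point RUL predictor into an interval predictor with ~(1 - alpha) coverage."""
    # 1. Fit any single-point RUL predictor on the proper training split.
    model = GradientBoostingRegressor().fit(X_train, y_train)

    # 2. Compute nonconformity scores (absolute residuals) on the calibration split.
    scores = np.abs(y_calib - model.predict(X_calib))

    # 3. Take the finite-sample-corrected (1 - alpha) quantile of the scores.
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")

    # 4. Predicted RUL intervals: point prediction +/- q_hat.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

The coverage guarantee holds regardless of which point predictor is wrapped, which is what makes it straightforward to conformalize both the deep convolutional network and the gradient boosting model in the same way.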
With the advent of deep learning applications on edge devices, researchers actively try to optimize their deployments on low-power and memory-constrained devices. There are established compression methods, such as quantization, pruning, and architecture search, that leverage commodity hardware. Apart from conventional compression algorithms, one may redesign the operations of deep learning models to allow more efficient implementation. To this end, we propose EuclidNet, a compression method designed to be implemented on hardware, which replaces the multiplication $xw$ with the Euclidean distance $(x-w)^2$. We show that EuclidNet is aligned with matrix multiplication and that it can be used as a measure of similarity in the case of convolutional layers. Furthermore, we show that under various transformations and noise scenarios, EuclidNet exhibits performance on par with deep learning models designed with multiplication operations.
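A small sketch of the multiplication-free similarity described above, shown as a drop-in replacement for a linear layer. The negation (so that higher similarity gives a larger score), the initialization, and the class name are illustrative conventions, not necessarily the exact formulation in EuclidNet:

```python
import torch
import torch.nn as nn

class EuclidLinear(nn.Module):
    """Linear-like layer that scores inputs against weights with -(x - w)^2 instead of x * w."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features), weight: (out_features, in_features)
        diff = x.unsqueeze(1) - self.weight.unsqueeze(0)   # (batch, out, in)
        # Sum of negative squared differences plays the role of the dot product.
        return -(diff ** 2).sum(dim=-1) + self.bias

# Usage: replace nn.Linear(784, 10) with EuclidLinear(784, 10) in a classifier.
```

Expanding the sum gives 2(x·w) minus the squared norms of x and w, which is one way to see the abstract's claim that the operation is aligned with matrix multiplication.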
Performance-metrics-driven context caching has a profound impact on throughput and response time in distributed context management systems for real-time context queries. This paper proposes a reinforcement learning based approach to adaptively cache context, with the objective of minimizing the cost incurred by context management systems in responding to context queries. Our novel algorithms enable context queries and sub-queries to reuse and repurpose cached context in an efficient manner. This approach is distinct from traditional data caching approaches in three main ways. First, we make selective context cache admissions with no prior knowledge of the context or the context query load. Second, we develop and incorporate innovative heuristic models to estimate the expected performance of caching an item when making admission decisions. Third, our strategy defines a time-aware continuous cache action space. We present two reinforcement learning agents: a value-function-estimating actor-critic agent and a policy-search agent using the deep deterministic policy gradient (DDPG) method. The paper also proposes adaptive policies, such as eviction and cache memory scaling, to complement our objective. Our method is evaluated using a synthetically generated load of context sub-queries and a synthetic data set inspired by real-world data and query samples. We further investigate optimal adaptive caching configurations under different settings. This paper presents, compares, and discusses our findings, showing that the proposed selective caching methods achieve short- and long-term cost and performance efficiency. The paper demonstrates that the proposed methods outperform other modes of context management, such as the redirector mode, the database mode, and a cache-all policy, by up to 60% in cost efficiency.
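A heavily simplified sketch of how a time-aware continuous cache action might be combined with a heuristic expected-benefit estimate, as described above. The cost model, the `agent.act` interface, and all parameter names are hypothetical illustrations rather than the paper's algorithms:

```python
import random

def expected_caching_benefit(hit_rate_est, cache_cost_per_sec, refresh_cost,
                             query_cost_db, expected_queries_per_sec, ttl):
    """Illustrative expected net benefit of caching a context item for `ttl` seconds:
    savings from answering queries out of cache minus holding and refresh costs.
    Not the paper's exact cost model."""
    savings = expected_queries_per_sec * ttl * hit_rate_est * query_cost_db
    holding = cache_cost_per_sec * ttl + refresh_cost
    return savings - holding

def admit(agent, item_features, epsilon=0.1):
    """Time-aware continuous action: the agent outputs a caching lifetime in seconds.

    `agent.act` stands in for the actor network of an actor-critic/DDPG agent
    (assumed interface). A non-positive action means "do not cache"."""
    if random.random() < epsilon:            # exploration
        ttl = random.uniform(0.0, 60.0)
    else:
        ttl = agent.act(item_features)       # exploitation
    return ttl if ttl > 0 else None
```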
We propose a framework in which multiple entities collaborate to build a machine learning model while preserving privacy of their data. The approach utilizes feature embeddings from shared/per-entity feature extractors transforming data into a feature space for cooperation between entities. We propose two specific methods and compare them with a baseline method. In Shared Feature Extractor (SFE) Learning, the entities use a shared feature extractor to compute feature embeddings of samples. In Locally Trained Feature Extractor (LTFE) Learning, each entity uses a separate feature extractor and models are trained using concatenated features from all entities. As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data. Secure multi-party algorithms are utilized to train models without revealing data or features in plain text. We investigate the trade-offs among SFE, LTFE, and CTFE in regard to performance, privacy leakage (using an off-the-shelf membership inference attack), and computational cost. LTFE provides the most privacy, followed by SFE, and then CTFE. Computational cost is lowest for SFE and the relative speed of CTFE and LTFE depends on network architecture. CTFE and LTFE provide the best accuracy. We use MNIST, a synthetic dataset, and a credit card fraud detection dataset for evaluations.
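A toy sketch of the locally-trained-feature-extractor (LTFE) arrangement described above, with two entities computing embeddings locally and a model trained on their concatenation. In the paper this combination and training would run under secure multi-party computation, which is omitted here, and all class names, dimensions, and the vertical feature split are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LocalExtractor(nn.Module):
    """Per-entity feature extractor trained only on that entity's own data."""
    def __init__(self, in_dim: int, emb_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

# Two entities hold different feature columns of the same samples (assumed vertical split).
extractor_a, extractor_b = LocalExtractor(in_dim=20), LocalExtractor(in_dim=12)
classifier = nn.Linear(16 + 16, 2)   # trained on concatenated embeddings

def joint_logits(x_a, x_b):
    # In LTFE, only these embeddings (never the raw x_a, x_b) leave each entity;
    # in the actual protocol they would be combined via secure MPC.
    z = torch.cat([extractor_a(x_a), extractor_b(x_b)], dim=1)
    return classifier(z)

logits = joint_logits(torch.randn(8, 20), torch.randn(8, 12))
```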